Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Imitation learning is a popular approach for teaching motor skills to robots. However, most approaches focus on extracting policy parameters from execution traces alone (i.e., motion trajectories and perceptual data). No adequate communication channel exists between the human expert and the robot to describe critical aspects of the task, such as the properties of the target object or the intended shape of the motion. Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent (e.g., "go to the large green bowl").
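The core idea above, conditioning an imitation policy on both perception and a language embedding, can be sketched in a few lines. This is an illustrative toy, not the paper's architecture: the hash-based "encoder", the dimensions, and the single linear layer are all assumptions made for exposition.

```python
# Minimal sketch of a language-conditioned policy: it maps an
# (observation, instruction) pair to a continuous action by concatenating
# the observation with an instruction embedding. All names and dimensions
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode_instruction(text, dim=8):
    """Toy stand-in for a learned sentence encoder: a deterministic
    (within one run) pseudo-embedding derived from the instruction."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

class LanguageConditionedPolicy:
    def __init__(self, obs_dim, lang_dim, act_dim):
        # One linear layer over [observation ; language] for brevity;
        # a real model would be a trained neural network.
        self.lang_dim = lang_dim
        self.W = rng.standard_normal((act_dim, obs_dim + lang_dim)) * 0.1

    def act(self, obs, instruction):
        z = encode_instruction(instruction, dim=self.lang_dim)
        x = np.concatenate([obs, z])        # fuse perception and language
        return self.W @ x                   # e.g., an end-effector velocity

policy = LanguageConditionedPolicy(obs_dim=4, lang_dim=8, act_dim=3)
a = policy.act(np.zeros(4), "go to the large green bowl")
```

In a trained system the encoder and policy weights would be learned jointly from paired demonstrations and verbal descriptions; the sketch only shows the data flow.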
SRT-H: A Hierarchical Framework for Autonomous Surgery via Language Conditioned Imitation Learning
Kim, Ji Woong, Chen, Juo-Tung, Hansen, Pascal, Shi, Lucy X., Goldenberg, Antony, Schmidgall, Samuel, Scheikl, Paul Maria, Deguet, Anton, White, Brandon M., Tsai, De Ru, Cha, Richard, Jopling, Jeffrey, Finn, Chelsea, Krieger, Axel
Research on autonomous surgery has largely focused on simple task automation in controlled environments. However, real-world surgical applications demand dexterous manipulation over extended durations and generalization to the inherent variability of human tissue. These challenges remain difficult to address using existing logic-based or conventional end-to-end learning approaches. To address this gap, we propose a hierarchical framework for performing dexterous, long-horizon surgical steps. Our approach uses a high-level policy for task planning and a low-level policy for generating robot trajectories. The high-level planner plans in language space, generating task-level or corrective instructions that guide the robot through the long-horizon steps and correct for the low-level policy's errors. We validate our framework through ex vivo experiments on cholecystectomy, a commonly practiced minimally invasive procedure, and conduct ablation studies to evaluate key components of the system. Our method achieves a 100% success rate across eight unseen ex vivo gallbladders, operating fully autonomously without human intervention. This work demonstrates step-level autonomy in a surgical procedure, marking a milestone toward clinical deployment of autonomous surgical systems.
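The two-level decomposition described above can be sketched as a simple control loop. Everything here is hypothetical (the step names, function signatures, and failure flag are assumptions, not the SRT-H implementation); it only illustrates a high-level planner acting in language space on top of a low-level trajectory policy.

```python
# Sketch of a hierarchical loop: the high-level policy emits language
# instructions (task-level steps or corrections); the low-level policy
# turns each instruction plus the current observation into a motion.
STEPS = ["grab gallbladder infundibulum", "clip cystic duct", "cut cystic duct"]

def high_level_policy(obs, step_idx):
    """Plan in language space: advance through the task steps, or issue
    a corrective instruction when the last low-level action failed."""
    if obs.get("last_action_failed"):
        return "regrasp and retry", step_idx      # corrective instruction
    return STEPS[step_idx], step_idx + 1          # next task-level step

def low_level_policy(instruction, obs):
    """Stand-in for the trajectory generator."""
    return {"instruction": instruction,
            "trajectory": [obs["ee_pose"]]}       # single waypoint for brevity

# One iteration of the loop
obs = {"ee_pose": (0.0, 0.0, 0.1), "last_action_failed": False}
instr, next_step = high_level_policy(obs, 0)
motion = low_level_policy(instr, obs)
```

The key design point the abstract describes is that the interface between the two levels is natural language, which lets the planner both sequence long-horizon steps and issue corrections in the same vocabulary.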
Review for NeurIPS paper: Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Additional Feedback:
- "objects vary in shape, size, color": could also mention that object category varies.
- Line 127, "30000 most used English words": is no UNK token needed? Are all the words already in the vocabulary, even those from human users in that evaluation?
- Language templates come from human "experts," but no detail is given about who these experts are or what their inter-annotator agreement is.
- Table 2 is confusing to read.
Review for NeurIPS paper: Language-Conditioned Imitation Learning for Robot Manipulation Tasks
The reviewers initially had concerns, especially related to feature representations. However, the reviewers agreed that the author response was well written and addressed any major concerns about the paper. There was still a sentiment that integration of the ideas could be stronger, and that more complex, realistic environments would improve the paper, but that it was strong enough to be accepted as-is (though the authors are encouraged to take the advice of the reviewers for the camera ready).
Language-Conditioned Imitation Learning with Base Skill Priors under Unstructured Data
Zhou, Hongkuan, Bing, Zhenshan, Yao, Xiangtong, Su, Xiaojie, Yang, Chenguang, Huang, Kai, Knoll, Alois
Growing interest in language-conditioned robot manipulation aims to develop robots that can interpret language commands and manipulate objects accordingly. While language-conditioned approaches demonstrate impressive capabilities in familiar environments, they struggle to adapt to unfamiliar environment settings. In this study, we propose a general-purpose, language-conditioned approach that combines base skill priors and imitation learning under unstructured data to enhance generalization to unfamiliar environments. We assess our model's performance in both simulated and real-world environments in a zero-shot setting. In simulation, the proposed approach surpasses previously reported scores on the CALVIN benchmark, especially in the challenging Zero-Shot Multi-Environment setting. The average completed task length, indicating the average number of tasks the agent can continuously complete, improves by more than 2.5 times over the state-of-the-art method HULC. In addition, we conduct a zero-shot evaluation of our policy in a real-world setting, after training exclusively in simulated environments and without additional adaptation. In this evaluation over ten tasks, our approach achieved an average 30% improvement over the current state-of-the-art, demonstrating high generalization capability in both simulated environments and the real world. For further details, including access to our code and videos, please refer to https://hk-zh.github.io/spil/
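As a rough illustration of the base-skill-prior idea, the snippet below scores a small set of base skills from a language embedding and picks the most likely one. The skill names, the linear scorer, and the softmax are assumptions made for exposition, not the SPIL model.

```python
# Toy sketch of a prior over base skills: a language embedding is
# projected to one logit per skill, and a softmax gives the prior.
# The selected skill would then shape the low-level action.
import numpy as np

BASE_SKILLS = ["translate", "rotate", "grasp"]

def skill_prior(lang_embedding):
    """Score each base skill with a (fixed, illustrative) linear
    projection and normalize with a softmax."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((len(BASE_SKILLS), lang_embedding.size))
    logits = W @ lang_embedding
    p = np.exp(logits - logits.max())     # numerically stable softmax
    return p / p.sum()

probs = skill_prior(np.ones(8))
chosen = BASE_SKILLS[int(np.argmax(probs))]
```

In the learned setting, such a prior would be trained from unstructured play data and would regularize the low-level policy toward reusable motion primitives, which is what the abstract credits for the improved zero-shot generalization.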